ECE 4554/ ECE 5554 / Computer Vision

This file contains Problem 2, which is the coding portion of Homework 4. Your job is to implement/modify the sections within this notebook that are marked with "TO DO".

TO DO: Enter your Virginia Tech Username (PID) here: 906465121

Honor Code reminder

This is an individual assignment, not a team project. Please review the Honor Code statement in the syllabus.

Submission guidelines for the coding problems (Google Colab)

  1. Please verify that you have entered your Virginia Tech Username in all of the appropriate places.
  2. After clicking Runtime->Run all, verify that all of your solutions are visible in this notebook.
  3. Click File->Save near the top of the page to save the latest version of your notebook at Google Drive.
  4. Verify that the last 2 cells have executed, creating a HTML version of this notebook at Google Drive.)
  5. Look at the HTML file and check that all of your solutions are displayed correctly there.
  6. Download your notebook file and the HTML version to your laptop.
  7. On your laptop, create a ZIP version of this notebook file. Use file name Homework4_Code_USERNAME.zip, with your own Username.
  8. For your HTML version, use file name Homework4_Notebook_USERNAME.html, with your own Username.
  9. Submit these 2 files and your PDF file for the written problems SEPARATELY to Canvas. Do not zip them all together.

Overview

Your goal is to implement the KLT tracking tracking algorithm. Write your code without using OpenCV functions, except for a few that are indicated in the instructions below. Approved OpenCV functions tend to be for low-level operations such as file I/O, image format manipulation (e.g., color/grayscale conversion), and highlighting image locations by drawing geometric shapes (e.g., circles).

You may use any code that you have written for previous assignments, such as your linear_filter function and edge-detection code from HW2. Basic math functions from Python/NumPy are also allowed.

Environment setup

In [5]:
# Mount your Google Drive to this notebook
# The purpose is to allow your code to access to your files
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [7]:
# Change the directory to your own working directory
# Any files under your working directory are available to your code
# TO DO: enter the name of your directory
import os
os.chdir('/content/drive/MyDrive/Comp Vision')
In [8]:
# Import library modules
import sys
import cv2
import numpy as np
import matplotlib.pyplot as plt
# PIL is the Python Imaging Library
from PIL import Image  
# The following is a substitute for cv2.imshow, which Colab does not allow
from google.colab.patches import cv2_imshow

print('Python version:', sys.version)
print('OpenCV version:', cv2.__version__)
print('NumPy version: ', np.__version__)
Python version: 3.7.12 (default, Sep 10 2021, 00:21:48) 
[GCC 7.5.0]
OpenCV version: 4.1.2
NumPy version:  1.19.5

Getting started

You have been given a folder that contains the "hotel" image sequence. This sequence is a set of 51 images: hotel.seq00.png, ..., hotel.seq50.png. If you display them in quick succession you will see a toy hotel undergo rotation and a small amount of translation. Keep these files inside the folder hotel_images, and upload that folder to your working directory at Colab.

Use the following code block to verify a correct upload, and to illustrate keypoint detection. This example finds Shi-Tomasi feature points in image 0 of the sequence, and indicates their locations by small green circles.

In [9]:
# The purpose of this code block is to verify that you can access the hotel 
#  images, and to illustrate examples of keypoints.
# Parts of this example were borrowed from 
#  https://docs.opencv.org/4.x/d4/d8c/tutorial_py_shi_tomasi.html

DISPLAY_RADIUS = 3
DISPLAY_COLOR  = (0, 255, 0)

def keypointDetectionDemo(im0):
  # find image locations that may be good for tracking
  feature_params = dict( maxCorners = 300,
                       qualityLevel = 0.2, 
                       minDistance = 7,  
                       blockSize = 5 )     
  p0 = cv2.goodFeaturesToTrack(im0, mask = None, **feature_params)
  # now corners should contain an array of (floating point) pixel locations 
  if p0 is None:
    print("no keypoints were found!")
    return
  print (f'Number of detected keypoints = {p0.shape[0]}')

  # convert to kx2 format, where k is the number of feature points
  corners = np.zeros((p0.shape[0],2))
  for i in range(corners.shape[0]):
    corners[i] = p0[i][0]

  # draw a small circle at each detected point and display the result
  im0color = cv2.cvtColor(im0, cv2.COLOR_GRAY2BGR)
  cornersInt = np.intp(np.round(corners)) # convert to integers used for indexing 
  for i in cornersInt:
    x, y = i.ravel()      # returns a contiguous flattened array
    cv2.circle(im0color, (x, y), DISPLAY_RADIUS, DISPLAY_COLOR)

  cv2_imshow(im0color)
  return
  
# load and display a sample image, detect features, and display the results
im0 = cv2.imread("hotel_images/hotel.seq00.png", cv2.COLOR_BGR2GRAY)
keypointDetectionDemo(im0)
Number of detected keypoints = 161

Tracking

Apply the Kanade-Lucas-Tomasi tracking procedure to track the keypoints that were detected in the previous code example. Those keypoints were detected in image 0 of the hotel sequence, and the goal is to track them throughout images 1 to 50.

The KLT procedure operates on a pair of images that were captured at times t and t+1. For a given keypoint at (x, y) at time t, the procedure tries to find the new location (x', y') of the same keypoint at time t+1. For reference, the following pseudocode is from the lecture slides.
hwScreenshot 2021-11-22 164413.png

The KLT tracking procedure assumes small movements of keypoints from t to t+1. For this reason, tracking needs to be performed at subpixel resolution. Keypoint locations therefore need to be maintained using floating point values.

The matrix equation depends on spatial gradients, which can be noisy. For this reason, you should perform smoothing of the spatial gradients when computing these matrices.

You are allowed to use the OpenCV function getRectSubPix, which takes an image location at subpixel resolution, and interpolates neighboring pixel values to return a small region of interest from the image. For computing the summations in the pseudocode, it is suggested that you use a window size of 15x15 surrounding the keypoint. Some useful functions here are np.sum, np.multiply, etc.

The first function that you should implement is getNextPoints in the next code block. It should apply the KLT procedure using a pair of images for a given collection of feature points. For each feature point in an image at time t, the procedure tries to find the location of the corresponding point in an image at time t+1.

It is likely that some of the keypoints will eventually move out of the image frame as they are tracked over the entire sequence. To handle these situations in a simple way, it is suggested that you maintain a movedOutFlag vector of True/False values. If an element in movedOutFlag is False, then the associated keypoint is inside the image frame and should be processed; but if True, then getNextPoints should ignore that keypoint.

In [10]:
def linear_filter(img_in, kernel):
  shape = kernel.shape
  buf = int(shape[0]/2)

  img_out = img_in.copy()
  for i in range (buf,img_in.shape[0]-shape[0]-1):
    for j in range (buf,img_in.shape[1]-shape[1]-1):
      sum = float(0) 
      for a in range(shape[0]):
        for b in range(shape[1]):
          sum = sum + (img_in[i+a,j+b]*kernel[a,b])
          
      img_out[i+buf,j+buf] =min(255,abs(sum))
  

  return img_out # Each pixel must be of type np.float32


def getNextPoints(im1, im2, xy, movedOutFlag):
  '''Track keypoints from image im1 to image im2
     Input:
      im1: grayscale image at time t; shape (m, n)
      im2: grayscale image at time t+1; shape (m, n)
      xy:  a numpy array of size kx2, where k is the number of keypoints
       Each keypoint is of the form [x, y], with both in floating-point format
     Output:
      xy2: updated keypoint locations; same format as xy input 
      movedOutFlag: array of True/False values, of size kx1, 
       to indicate whether each associated
       keypoint has moved outside the dimensions of the image array

     TO DO: Implement the getNextPoints function.
  '''
  xy2 = np.copy(xy) 
  movedOutFlag = np.zeros(xy.shape[0],dtype=np.bool)

  #smoothing the image using gaussian blur
  imageGauss1 = cv2.GaussianBlur(im1,(7,7),0)
  imageGauss2 = cv2.GaussianBlur(im2,(7,7),0)

  RectWindow = (15,15)

  for i in range(xy.shape[0]):
    x = xy[i,0] 
    y = xy[i,1] 

    P   = cv2.getRectSubPix(imageGauss1,RectWindow,(x  , y)).astype(np.float)
    P_x  = cv2.getRectSubPix(imageGauss1,RectWindow,(x-1  , y)).astype(np.float)
    P_y  = cv2.getRectSubPix(imageGauss1,RectWindow,(x  , y-1)).astype(np.float)

    #getting the I x and y values
    I_x  = cv2.getRectSubPix(imageGauss1,RectWindow,(x+1, y)).astype(np.float)  - P_x
    I_y  = cv2.getRectSubPix(imageGauss1,RectWindow,(x  , y+1)).astype(np.float)- P_y

    #getting the I x^2, y^2 and xy values 
    I_x_2 = np.sum(np.sum(np.multiply(I_x,I_x).astype(np.float),0))
    I_y_2 = np.sum(np.sum(np.multiply(I_y,I_y).astype(np.float),0))
    I_x_y = np.sum(np.sum(np.multiply(I_x,I_y).astype(np.float),0))

    #forming the LHS matrix
    lhs = np.array([[I_x_2,I_x_y],[I_x_y,I_y_2]], dtype=np.int)
    uv_previous = np.array([x,y])
    
    #finding the lhs transpose
    if(np.linalg.det(lhs)!=0):
      lhs_T  = np.linalg.inv(lhs)
      numOfIteration=0

      #iterating it for 30 itmes so as to not go into a infinite loop
      while(numOfIteration<30):
        
        It = cv2.getRectSubPix(imageGauss2,RectWindow,(x,y)) - P  
        I_x_t = np.sum(np.sum(np.multiply(I_x,It).astype(np.float),0))
        I_y_t = np.sum(np.sum(np.multiply(I_y,It).astype(np.float),0))

        numOfIteration+= 1

        #finding the u,v by doing rhs*lhs^T
        uv  = np.matmul(lhs_T,-np.array([I_x_t,I_y_t]))

        xy2[i] = (x,y) 
        y = y+uv[1]
        x = x+uv[0]
      
        #to check of the point has moved if so continue
        if(uv[0]==uv_previous[0] and uv[1]==uv_previous[1]):
          break

        uv_previous[1] = uv[1]
        uv_previous[0] = uv[0]
        
        #checking if the point has moves outside the output image
        if np.all((0<=y<=im1.shape[1]) and (0<=x<=im1.shape[0])):
          movedOutFlag[i] = False
        else: 
          movedOutFlag[i] = True

  return ( xy2, movedOutFlag )

Test #1

Use the following code block to test your getNextPoints function using only a few keypoints. If you would like to add more tests, place the new code after the code that is already present. If your code is working correctly, then this test should display the keypoints at correct locations in both images.

In [11]:
# Test getNextPoints with only a few keypoints, and with only one pair of images

im1 = cv2.imread("hotel_images/hotel.seq00.png", cv2.COLOR_BGR2GRAY)
im2 = cv2.imread("hotel_images/hotel.seq01.png", cv2.COLOR_BGR2GRAY)
# a few example keypoints
xy = np.array([[192., 263.], [245., 281.], [ 31., 414.]]) 
# initialize this flag to False, to indicate that all keypoints are inside the image
movedOutFlag = np.zeros(xy.shape[0])

xy2, movedOutFlag = getNextPoints(im1, im2, xy, movedOutFlag)
print (f'initial keypoints = {xy}')
print (f'updated keypoints = {xy2}')
print (f'updated flag = {movedOutFlag}')

# For both images, draw a small circle at each keypoint and display the result
DISPLAY_RADIUS = 3
DISPLAY_COLOR = (0, 255, 0)
im1color = cv2.cvtColor(im1, cv2.COLOR_GRAY2BGR)
corners = np.intp(np.round(xy))
for i in corners:
  x, y = i.ravel()
  cv2.circle(im1color, (x, y), DISPLAY_RADIUS, DISPLAY_COLOR)
cv2_imshow(im1color)

im2color = cv2.cvtColor(im2, cv2.COLOR_GRAY2BGR)
corners = np.intp(np.round(xy2))
for i in corners:
  x, y = i.ravel()
  cv2.circle(im2color, (x, y), DISPLAY_RADIUS, DISPLAY_COLOR)
cv2_imshow(im2color)
initial keypoints = [[192. 263.]
 [245. 281.]
 [ 31. 414.]]
updated keypoints = [[192.17744573 263.12756513]
 [245.34531095 281.33925752]
 [ 27.45539382 412.81246638]]
updated flag = [False False False]

Now that you have tested your tracking code with on a small scale, it is time to work with more keypoints and longer image sequences.

The next code block provides a suggested structure for the overall tracking solution. The top-level function is mainFunction, which should generate the required results after you have finished. Examine this top-level function to see how the other functions fit together. Read the comment blocks for more detailed descriptions.

Write additional code to complete the the next code block. The only parts that you should need to update are marked "TO DO". The grader should only need to run mainFunction in order to generate and see your final KLT tracking results. Write the code so that it automatically displays the following:

  • images showing tracked keypoint locations for t=1 and t=11 and t=50, and possibly for a few more values of t (but do not display all of the images in your final output)
  • at the end, display image 0 that has been updated to show 2D paths that were taken by each keypoint during tracking (an example is given later)

(You are allowed to make further modifications to the code. But in those cases, please provide detailed comments to explain your reasons for making the changes.)

In [12]:
# KLT TRACKING - main code block

# global variables - try to write you code without needing additional globals
#NUMBER_OF_IMAGES = 51  # try smaller values here for initial testing
NUMBER_OF_IMAGES = 51
DISPLAY_RADIUS = 3
GREEN = (0, 255, 0)
YELLOW = (0, 255, 255)

def mainFunction():
  '''This is the main "driver" function that performs KLT tracking
  '''
  print("In mainFunction")
  allImgs = readImages(NUMBER_OF_IMAGES)
  print (f'number of images that were read = {len(allImgs)}')

  # get initial keypoints from image 0
  image0 = allImgs[0]
  xy = getKeypoints(image0)
  if xy is None:
    print("no points to track!")
    return
  print (f'number of detected keypoints = {xy.shape[0]}')

  # display keypoints for image 0
  image0color = cv2.cvtColor(image0, cv2.COLOR_GRAY2BGR)
  corners = np.intp(np.round(xy))  
  for i in corners:
    x, y = i.ravel()
    cv2.circle(image0color, (x, y), DISPLAY_RADIUS, GREEN)

  # track the initial keypoints through all remaining images
  xyt = trackPoints(xy, allImgs)

  # in image 0, draw the paths taken by the keypoints 
  drawPaths(image0color, xyt)
  return


def readImages(filecount):
  '''Read a sequence of image files, starting with image 0 in 'hotel' sequence
     Input:
      filecount: how many image files to read
     Output:
      allImages: a list of OpenCV images in sequential order 
  '''
  print("In function readImages")
  allImages = []
  for i in range(filecount):
    print (f'reading image {i:02}')
    imagetmp = cv2.imread("hotel_images/hotel.seq" + f'{i:02d}' + ".png", cv2.COLOR_BGR2GRAY)
    allImages.append(imagetmp)
  return allImages


def getKeypoints(im0):
  '''Find keypoints that will be good for tracking;
     you are allowed to copy code directly from keypointDetectionDemo
     Input:
      im0: grayscale source image with shape (m, n)
     Output:
      corners: a numpy array of size kx2, where k is the number of keypoints. 
       Each keypoint is of the form [x, y], where x and y are floating-point
       values that represent image locations in the (horizontal, vertical) 
       directions, respectively. 
  '''
  print ("In function getKeypoints")
  # replace the following line; it is just a placeholder

  feature_params = dict( maxCorners = 300,
                       qualityLevel = 0.2, 
                       minDistance = 7,  
                       blockSize = 5 )     
  p0 = cv2.goodFeaturesToTrack(im0, mask = None, **feature_params)
  # now corners should contain an array of (floating point) pixel locations 
  if p0 is None:
    print("no keypoints were found!")
    return
  print (f'Number of detected keypoints = {p0.shape[0]}')

  # convert to kx2 format, where k is the number of feature points
  corners = np.zeros((p0.shape[0],2))
  for i in range(corners.shape[0]):
    corners[i] = p0[i][0]


  return corners
  

def trackPoints(xy, imageSequence):
  '''Track keypoints through the given image sequence
     Input:
      xy: a numpy array containing keypoints for the first image in imageSequence;
       format is identical to 'corners' in getKeypoints
      imageSequence: a list of OpenCV images in sequential order
     Output:
      xyt: <any format that you see fit, as needed by the drawPaths function>

     TO DO: Update the trackPoints function
  '''
  print ("In function trackPoints")
  print (f'length of imageSequence = {len(imageSequence)}')
  movedOutFlag = np.zeros(xy.shape[0])
  # initialize xyt to contain any information that is needed for drawing paths at the end of tracking
  # also add code in this function as needed to maintain xyt
  xyt = []

  for t in range(0, len(imageSequence)-1): # predict for all images except first in sequence
    print (f't = {t}; predicting for t = {t+1}') 
    xy2, movedOutFlag = getNextPoints(imageSequence[t], imageSequence[t+1], xy, movedOutFlag)
    xy = xy2

    corners = np.intp(np.round(xy2))  
    for c in range(0, corners.shape[0]):
      if movedOutFlag[c] == False:
        y = corners[c][1]         
        x = corners[c][0]
        
        xyt.append([x,y]) 
        

    # for selected instants in time, display the latest image with highlighted keypoints 
    if ((t == 0) or (t == 10) or (t == 20) or (t == 30) or (t == 40) or (t == 49)):
      im2color = cv2.cvtColor(imageSequence[t+1], cv2.COLOR_GRAY2BGR)
      corners = np.intp(np.round(xy2))  
      for c in range(0, corners.shape[0]):
        if movedOutFlag[c] == False:
          x = corners[c][0]
          y = corners[c][1]
          cv2.circle(im2color, (x, y), DISPLAY_RADIUS, GREEN)
      cv2_imshow(im2color)
    
  return xyt

def drawPaths(im0color, xyt):
  '''In the given image, draw paths that were taken by each keypoint during tracking
     Input:
      im0color: a color image with shape (m, n, 3), typically the first image in a sequence
     Output:
      xyt: <any format that you see fit, as needed for drawing the paths>

     TO DO: Implement the drawPaths function
  '''
  print ("In function drawPaths")

  for i in range(len(xyt)-1):
    try:
      im0color[xyt[i][1], xyt[i][0], :] = YELLOW
    except:
      pass

  print ("FINISHED: here are the paths of the tracked keypoints")
  cv2_imshow(im0color)

Test #2

The grader should only need to run mainFunction in the next block in order to run your code and see the required output.

In [13]:
# Run the KLT tracking code
mainFunction()
In mainFunction
In function readImages
reading image 00
reading image 01
reading image 02
reading image 03
reading image 04
reading image 05
reading image 06
reading image 07
reading image 08
reading image 09
reading image 10
reading image 11
reading image 12
reading image 13
reading image 14
reading image 15
reading image 16
reading image 17
reading image 18
reading image 19
reading image 20
reading image 21
reading image 22
reading image 23
reading image 24
reading image 25
reading image 26
reading image 27
reading image 28
reading image 29
reading image 30
reading image 31
reading image 32
reading image 33
reading image 34
reading image 35
reading image 36
reading image 37
reading image 38
reading image 39
reading image 40
reading image 41
reading image 42
reading image 43
reading image 44
reading image 45
reading image 46
reading image 47
reading image 48
reading image 49
reading image 50
number of images that were read = 51
In function getKeypoints
Number of detected keypoints = 161
number of detected keypoints = 161
In function trackPoints
length of imageSequence = 51
t = 0; predicting for t = 1
t = 1; predicting for t = 2
t = 2; predicting for t = 3
t = 3; predicting for t = 4
t = 4; predicting for t = 5
t = 5; predicting for t = 6
t = 6; predicting for t = 7
t = 7; predicting for t = 8
t = 8; predicting for t = 9
t = 9; predicting for t = 10
t = 10; predicting for t = 11
t = 11; predicting for t = 12
t = 12; predicting for t = 13
t = 13; predicting for t = 14
t = 14; predicting for t = 15
t = 15; predicting for t = 16
t = 16; predicting for t = 17
t = 17; predicting for t = 18
t = 18; predicting for t = 19
t = 19; predicting for t = 20
t = 20; predicting for t = 21
t = 21; predicting for t = 22
t = 22; predicting for t = 23
t = 23; predicting for t = 24
t = 24; predicting for t = 25
t = 25; predicting for t = 26
t = 26; predicting for t = 27
t = 27; predicting for t = 28
t = 28; predicting for t = 29
t = 29; predicting for t = 30
t = 30; predicting for t = 31
t = 31; predicting for t = 32
t = 32; predicting for t = 33
t = 33; predicting for t = 34
t = 34; predicting for t = 35
t = 35; predicting for t = 36
t = 36; predicting for t = 37
t = 37; predicting for t = 38
t = 38; predicting for t = 39
t = 39; predicting for t = 40
t = 40; predicting for t = 41
t = 41; predicting for t = 42
t = 42; predicting for t = 43
t = 43; predicting for t = 44
t = 44; predicting for t = 45
t = 45; predicting for t = 46
t = 46; predicting for t = 47
t = 47; predicting for t = 48
t = 48; predicting for t = 49
t = 49; predicting for t = 50
In function drawPaths
FINISHED: here are the paths of the tracked keypoints

Here is an example output from drawPaths. With close examination, you can see that it is not perfect for all keypoints. It is okay if your result is not perfect, either. (But try to make your output at least this good.)

tracesGreendownload.png

In [14]:
def getKeypoints(ImageInput):

  print ("In self implemented getKeypoints function")
  edges = []

  ImageInput = cv2.GaussianBlur(ImageInput, (5,5), 0.2)

  I_y, I_x = np.gradient(ImageInput)

  I_x_x = I_x*I_x
  I_y_y = I_y*I_y
  I_x_y = I_x*I_y

  k = 0.05
  negate = 2

  height, width = I_x_x.shape[0], I_x_x.shape[1]
  R = np.zeros((height, width))
  for y in range(negate, height-negate):
    for x in range(negate, width-negate):
        S_x_x = np.sum(I_x_x[y-negate:y+1+negate, x-negate:x+1+negate])
        S_y_y = np.sum(I_y_y[y-negate:y+1+negate, x-negate:x+1+negate])
        S_x_y = np.sum(I_x_y[y-negate:y+1+negate, x-negate:x+1+negate])

        detH = (S_x_x * S_y_y) - (S_x_y**2)
        traceH = S_x_x + S_y_y

        R[y,x] = detH - k * traceH**2

  for y in range(negate, height-negate, 5):
    for x in range(negate, width-negate, 5):
      r = np.max(R[y-negate:y+negate, x-negate:x+negate])
      if r > 4*1e6:
        edges.append([x, y])

  edges = np.array(edges)
  edges = np.intp(np.round(edges))
  edges = np.float32(edges)

  return edges
In [15]:
mainFunction()
In mainFunction
In function readImages
reading image 00
reading image 01
reading image 02
reading image 03
reading image 04
reading image 05
reading image 06
reading image 07
reading image 08
reading image 09
reading image 10
reading image 11
reading image 12
reading image 13
reading image 14
reading image 15
reading image 16
reading image 17
reading image 18
reading image 19
reading image 20
reading image 21
reading image 22
reading image 23
reading image 24
reading image 25
reading image 26
reading image 27
reading image 28
reading image 29
reading image 30
reading image 31
reading image 32
reading image 33
reading image 34
reading image 35
reading image 36
reading image 37
reading image 38
reading image 39
reading image 40
reading image 41
reading image 42
reading image 43
reading image 44
reading image 45
reading image 46
reading image 47
reading image 48
reading image 49
reading image 50
number of images that were read = 51
In self implemented getKeypoints function
number of detected keypoints = 165
In function trackPoints
length of imageSequence = 51
t = 0; predicting for t = 1
t = 1; predicting for t = 2
t = 2; predicting for t = 3
t = 3; predicting for t = 4
t = 4; predicting for t = 5
t = 5; predicting for t = 6
t = 6; predicting for t = 7
t = 7; predicting for t = 8
t = 8; predicting for t = 9
t = 9; predicting for t = 10
t = 10; predicting for t = 11
t = 11; predicting for t = 12
t = 12; predicting for t = 13
t = 13; predicting for t = 14
t = 14; predicting for t = 15
t = 15; predicting for t = 16
t = 16; predicting for t = 17
t = 17; predicting for t = 18
t = 18; predicting for t = 19
t = 19; predicting for t = 20
t = 20; predicting for t = 21
t = 21; predicting for t = 22
t = 22; predicting for t = 23
t = 23; predicting for t = 24
t = 24; predicting for t = 25
t = 25; predicting for t = 26
t = 26; predicting for t = 27
t = 27; predicting for t = 28
t = 28; predicting for t = 29
t = 29; predicting for t = 30
t = 30; predicting for t = 31
t = 31; predicting for t = 32
t = 32; predicting for t = 33
t = 33; predicting for t = 34
t = 34; predicting for t = 35
t = 35; predicting for t = 36
t = 36; predicting for t = 37
t = 37; predicting for t = 38
t = 38; predicting for t = 39
t = 39; predicting for t = 40
t = 40; predicting for t = 41
t = 41; predicting for t = 42
t = 42; predicting for t = 43
t = 43; predicting for t = 44
t = 44; predicting for t = 45
t = 45; predicting for t = 46
t = 46; predicting for t = 47
t = 47; predicting for t = 48
t = 48; predicting for t = 49
t = 49; predicting for t = 50
In function drawPaths
FINISHED: here are the paths of the tracked keypoints

Creating a HTML version of your current notebook

In [17]:
# TO DO: Provide the full path to your Jupyter notebook file
!jupyter nbconvert --to html "/content/drive/MyDrive/Comp Vision/Homework4_USERNAME.ipynb"
[NbConvertApp] WARNING | pattern u'/content/drive/MyDrive/ECE5554_Fall2021/Homework4/Homework4_USERNAME.ipynb' matched no files
This application is used to convert notebook files (*.ipynb) to various other
formats.

WARNING: THE COMMANDLINE INTERFACE MAY CHANGE IN FUTURE RELEASES.

Options
-------

Arguments that take values are actually convenience aliases to full
Configurables, whose aliases are listed on the help line. For more information
on full configurables, see '--help-all'.

--execute
    Execute the notebook prior to export.
--allow-errors
    Continue notebook execution even if one of the cells throws an error and include the error message in the cell output (the default behaviour is to abort conversion). This flag is only relevant if '--execute' was specified, too.
--no-input
    Exclude input cells and output prompts from converted document. 
    This mode is ideal for generating code-free reports.
--stdout
    Write notebook output to stdout instead of files.
--stdin
    read a single notebook file from stdin. Write the resulting notebook with default basename 'notebook.*'
--inplace
    Run nbconvert in place, overwriting the existing notebook (only 
    relevant when converting to notebook format)
-y
    Answer yes to any questions instead of prompting.
--clear-output
    Clear output of current file and save in place, 
    overwriting the existing notebook.
--debug
    set log level to logging.DEBUG (maximize logging output)
--no-prompt
    Exclude input and output prompts from converted document.
--generate-config
    generate default config file
--nbformat=<Enum> (NotebookExporter.nbformat_version)
    Default: 4
    Choices: [1, 2, 3, 4]
    The nbformat version to write. Use this to downgrade notebooks.
--output-dir=<Unicode> (FilesWriter.build_directory)
    Default: ''
    Directory to write output(s) to. Defaults to output to the directory of each
    notebook. To recover previous default behaviour (outputting to the current
    working directory) use . as the flag value.
--writer=<DottedObjectName> (NbConvertApp.writer_class)
    Default: 'FilesWriter'
    Writer class used to write the  results of the conversion
--log-level=<Enum> (Application.log_level)
    Default: 30
    Choices: (0, 10, 20, 30, 40, 50, 'DEBUG', 'INFO', 'WARN', 'ERROR', 'CRITICAL')
    Set the log level by value or name.
--reveal-prefix=<Unicode> (SlidesExporter.reveal_url_prefix)
    Default: u''
    The URL prefix for reveal.js (version 3.x). This defaults to the reveal CDN,
    but can be any url pointing to a copy  of reveal.js.
    For speaker notes to work, this must be a relative path to a local  copy of
    reveal.js: e.g., "reveal.js".
    If a relative path is given, it must be a subdirectory of the current
    directory (from which the server is run).
    See the usage documentation
    (https://nbconvert.readthedocs.io/en/latest/usage.html#reveal-js-html-
    slideshow) for more details.
--to=<Unicode> (NbConvertApp.export_format)
    Default: 'html'
    The export format to be used, either one of the built-in formats
    ['asciidoc', 'custom', 'html', 'latex', 'markdown', 'notebook', 'pdf',
    'python', 'rst', 'script', 'slides'] or a dotted object name that represents
    the import path for an `Exporter` class
--template=<Unicode> (TemplateExporter.template_file)
    Default: u''
    Name of the template file to use
--output=<Unicode> (NbConvertApp.output_base)
    Default: ''
    overwrite base name use for output files. can only be used when converting
    one notebook at a time.
--post=<DottedOrNone> (NbConvertApp.postprocessor_class)
    Default: u''
    PostProcessor class used to write the results of the conversion
--config=<Unicode> (JupyterApp.config_file)
    Default: u''
    Full path of a config file.

To see all available configurables, use `--help-all`

Examples
--------

    The simplest way to use nbconvert is
    
    > jupyter nbconvert mynotebook.ipynb
    
    which will convert mynotebook.ipynb to the default format (probably HTML).
    
    You can specify the export format with `--to`.
    Options include ['asciidoc', 'custom', 'html', 'latex', 'markdown', 'notebook', 'pdf', 'python', 'rst', 'script', 'slides'].
    
    > jupyter nbconvert --to latex mynotebook.ipynb
    
    Both HTML and LaTeX support multiple output templates. LaTeX includes
    'base', 'article' and 'report'.  HTML includes 'basic' and 'full'. You
    can specify the flavor of the format used.
    
    > jupyter nbconvert --to html --template basic mynotebook.ipynb
    
    You can also pipe the output to stdout, rather than a file
    
    > jupyter nbconvert mynotebook.ipynb --stdout
    
    PDF is generated via latex
    
    > jupyter nbconvert mynotebook.ipynb --to pdf
    
    You can get (and serve) a Reveal.js-powered slideshow
    
    > jupyter nbconvert myslides.ipynb --to slides --post serve
    
    Multiple notebooks can be given at the command line in a couple of 
    different ways:
    
    > jupyter nbconvert notebook*.ipynb
    > jupyter nbconvert notebook1.ipynb notebook2.ipynb
    
    or you can specify the notebooks list in a config file, containing::
    
        c.NbConvertApp.notebooks = ["my_notebook.ipynb"]
    
    > jupyter nbconvert --config mycfg.py